add fused linear-loss function in Domino #965

Open · wants to merge 3 commits into base: master

Conversation

duanhx1037

Changes

Add a fused and chunked function for linear and cross-entropy loss computation in Domino, based on [Liger-Kernel](https://github.com/linkedin/Liger-Kernel).
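The idea behind the fusion is to project the hidden states to logits one chunk of tokens at a time and immediately reduce each chunk to its cross-entropy contribution, so the full `[num_tokens, vocab_size]` logit matrix is never materialized at once. A minimal NumPy sketch of the forward pass (the function names and chunking scheme here are illustrative only; the actual Domino/Liger-Kernel implementation uses fused Triton kernels with a matching backward pass):

```python
import numpy as np

def softmax_xent(logits, targets):
    # Numerically stable cross-entropy per row:
    # loss_i = logsumexp(logits_i) - logits_i[target_i]
    m = logits.max(axis=1, keepdims=True)
    logsumexp = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    return logsumexp - logits[np.arange(len(targets)), targets]

def chunked_linear_xent(hidden, weight, targets, chunk_size=4):
    # Fused linear + cross-entropy over chunks of rows: each chunk's
    # logits are reduced to a scalar loss contribution and then
    # discarded, so only a [chunk_size, vocab] slice is live at a time.
    n = hidden.shape[0]
    total = 0.0
    for start in range(0, n, chunk_size):
        h = hidden[start:start + chunk_size]
        t = targets[start:start + chunk_size]
        logits = h @ weight  # [chunk, vocab]; freed after this iteration
        total += softmax_xent(logits, t).sum()
    return total / n
```

With this scheme, peak activation memory for the output layer scales with `chunk_size * vocab_size` rather than `num_tokens * vocab_size`, which is why the savings are largest in the vocabulary layer.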

Effect on memory usage

Reduces training memory usage, especially peak memory in the vocabulary layer. With num-layers=4, seq-length=512, batch-size=8 in training/DeepSpeed-Domino/pretrain_gpt3_2.7b.sh, the average per-iteration peak memory, measured by torch.cuda.max_memory_allocated(), drops from 6.158 GB to 5.0458 GB.

Effect on loss

Almost identical loss curve in a 1000-iteration experiment.
[loss-curve comparison image]

@duanhx1037 duanhx1037 requested a review from tjruwase as a code owner April 8, 2025 19:36
@GuanhuaWang GuanhuaWang self-requested a review April 8, 2025 20:30
@GuanhuaWang

Hi @duanhx1037,

Thanks for this PR. Please resolve the issues above:

  1. the DCO check failure
  2. the formatting issue (see the guide: https://github.com/deepspeedai/DeepSpeed/blob/master/CONTRIBUTING.md)

@GuanhuaWang GuanhuaWang self-assigned this Apr 8, 2025